- subspace iteration
- subspace iteration method, block power method (Russian: метод итераций в подпространстве, блочно-степенной метод)
English-Russian dictionary of industrial and scientific vocabulary, 2014.
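The second gloss, block power method, points at the usual implementation pattern for subspace iteration: repeatedly apply the matrix to a block of vectors and re-orthonormalize. Below is a minimal NumPy sketch for illustration only; the function and variable names are assumptions, not taken from the dictionary or any of the cited sources.

```python
import numpy as np

def subspace_iteration(A, k, iters=100):
    """Block power method: approximate the k dominant eigenpairs of A (illustrative sketch)."""
    n = A.shape[0]
    Q, _ = np.linalg.qr(np.random.randn(n, k))   # random orthonormal starting block
    for _ in range(iters):
        Z = A @ Q                                # apply A to the whole block
        Q, _ = np.linalg.qr(Z)                   # re-orthonormalize so the columns stay independent
    # Rayleigh-Ritz step: project A onto the subspace and solve the small k-by-k problem
    H = Q.T @ A @ Q
    theta, S = np.linalg.eig(H)
    return theta, Q @ S                          # approximate eigenvalues and eigenvectors

# usage on a symmetric test matrix (hypothetical example)
A = np.random.randn(50, 50)
A = A + A.T
vals, vecs = subspace_iteration(A, 3)
```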
Arnoldi iteration — In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of iterative methods. Arnoldi finds the eigenvalues of general (possibly non-Hermitian) matrices; an analogous method for Hermitian matrices is … Wikipedia
Power iteration — In mathematics, the power iteration is an eigenvalue algorithm: given a matrix A, the algorithm will produce a number λ (the eigenvalue) and a nonzero vector v (the eigenvector), such that Av = λv. The power iteration is a very… … Wikipedia
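As a rough illustration of the iteration described in this entry (multiply by A, renormalize, repeat), here is a hedged NumPy sketch; the names, iteration count, and Rayleigh-quotient estimate are illustrative assumptions.

```python
import numpy as np

def power_iteration(A, iters=1000):
    """Approximate the dominant eigenpair (lam, v) of A, i.e. A v ≈ lam v (illustrative sketch)."""
    v = np.random.randn(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)       # renormalize each step to keep the iterate bounded
    lam = v @ (A @ v)                # Rayleigh quotient estimate of the eigenvalue
    return lam, v
```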
Krylov subspace — In linear algebra, the Krylov subspace generated by an n-by-n matrix A and an n-vector b is the subspace K_n spanned by the vectors of the Krylov sequence: K_n = span{ b, Ab, A²b, …, A^(n−1)b }. It… … Wikipedia
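To make the span above concrete, a deliberately naive NumPy sketch that stacks the Krylov vectors b, Ab, …, A^(m−1)b as columns; the function name is an assumption, and practical codes orthonormalize these vectors (e.g. via Arnoldi or Lanczos) rather than using them directly.

```python
import numpy as np

def krylov_basis(A, b, m):
    """Return the n-by-m matrix whose columns are b, Ab, A^2 b, ..., A^(m-1) b (illustrative sketch)."""
    cols = [b]
    for _ in range(m - 1):
        cols.append(A @ cols[-1])    # next Krylov vector
    return np.column_stack(cols)     # these columns span the Krylov subspace K_m
```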
Generalized minimal residual method — In mathematics, the generalized minimal residual method (usually abbreviated GMRES) is an iterative method for the numerical solution of a system of linear equations. The method approximates the solution by the vector in a Krylov subspace with… … Wikipedia
Derivation of the conjugate gradient method — In numerical linear algebra, the conjugate gradient method is an iterative method for numerically solving the linear system where is symmetric positive definite. The conjugate gradient method can be derived from several different perspectives,… … Wikipedia
Principal component analysis — PCA of a multivariate Gaussian distribution centered at (1,3) with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction. The vectors shown are the eigenvectors of the covariance matrix scaled by… … Wikipedia
Gram–Schmidt process — In mathematics, particularly linear algebra and numerical analysis, the Gram–Schmidt process is a method for orthogonalizing a set of vectors in an inner product space, most commonly the Euclidean space R^n. The Gram–Schmidt process takes a… … Wikipedia
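A brief NumPy sketch of the process described in this entry, written in the modified variant for numerical stability; the function name and the assumption that the input columns are linearly independent are illustrative, not taken from the cited source.

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V with modified Gram-Schmidt (illustrative sketch)."""
    Q = np.array(V, dtype=float, copy=True)
    for j in range(Q.shape[1]):
        for i in range(j):
            Q[:, j] -= (Q[:, i] @ Q[:, j]) * Q[:, i]   # remove the component along q_i
        Q[:, j] /= np.linalg.norm(Q[:, j])             # normalize (assumes linear independence)
    return Q
```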
DIIS — (direct inversion in the iterative subspace or direct inversion of the iterative subspace), also known as Pulay mixing, is an extrapolation technique. DIIS was developed by Peter Pulay in the field of computational quantum chemistry with the… … Wikipedia
Lanczos algorithm — The Lanczos algorithm is an iterative algorithm invented by Cornelius Lanczos that is an adaptation of power methods to find eigenvalues and eigenvectors of a square matrix or the singular value decomposition of a rectangular matrix. It is… … Wikipedia
List of numerical analysis topics — This is a list of numerical analysis topics, by Wikipedia page. … Wikipedia
Orthogonalization — In linear algebra, orthogonalization is the process of finding a set of orthogonal vectors that span a particular subspace. Formally, starting with a linearly independent set of vectors {v1,...,vk} in an inner product space (most commonly the… … Wikipedia